10 research outputs found

    Neural networks application to divergence-based passive ranging

    The purpose of this report is to summarize the state of knowledge and outline the planned work on a divergence-based, neural-network approach to passive ranging derived from optical flow. Work in this and closely related areas is reviewed in order to provide the necessary background for further developments. New ideas for devising a monocular passive-ranging system are then introduced. It is shown that image-plane divergence is independent of image-plane location with respect to the focus of expansion and of camera maneuvers, because it directly measures the object's expansion, which in turn is related to the time-to-collision. Thus, a divergence-based method has the potential of providing reliable range, complementing other monocular passive-ranging methods which encounter difficulties in image areas close to the focus of expansion. Image-plane divergence can be thought of as a spatial/temporal pattern. A neural-network realization was chosen for this task because neural networks have generally performed well in various other pattern-recognition applications. The main goal of this work is to teach a neural network to derive the divergence from the imagery.
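    The divergence-to-range relation the abstract relies on can be made concrete: for a camera closing on a fronto-parallel surface, the flow-field divergence equals 2/tau, where tau is the time-to-collision, so tau can be read directly off the divergence. A minimal numerical sketch (the function name and the synthetic flow field are illustrative, not taken from the report):

```python
import numpy as np

def divergence_ttc(u, v, dt=1.0):
    """Estimate time-to-collision from the divergence of a 2-D flow field.

    For a camera closing on a fronto-parallel surface the flow divergence
    satisfies div = 2 / tau, hence tau = 2 / div.  u, v are per-pixel flow
    components in pixels per frame; dt is the frame interval.
    """
    du_dx = np.gradient(u, axis=1)
    dv_dy = np.gradient(v, axis=0)
    div = du_dx + dv_dy          # local divergence at every pixel
    mean_div = div.mean()        # average over the window
    return 2.0 * dt / mean_div

# Synthetic radially expanding flow about the focus of expansion:
# u = x / tau, v = y / tau has divergence exactly 2 / tau.
tau_true = 40.0                  # frames to collision (assumed)
ys, xs = np.mgrid[-32:32, -32:32].astype(float)
u, v = xs / tau_true, ys / tau_true
print(round(divergence_ttc(u, v), 2))   # -> 40.0
```

Note that the estimate is independent of where the window sits relative to the focus of expansion, which is exactly the property the abstract emphasizes.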

    Velocity filtering applied to optical flow calculations

    Optical flow is a method by which a stream of two-dimensional images obtained from a forward-looking passive sensor is used to map the three-dimensional volume in front of a moving vehicle. Passive ranging via optical flow is applied here to the helicopter obstacle-avoidance problem. Velocity filtering is used as a field-based method to determine the range to every pixel in the initial image. The theoretical understanding and performance analysis of velocity filtering as applied to optical flow are expanded, and experimental results are presented.
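    The core of velocity filtering is a shift-and-sum test: for each candidate image-plane velocity, every frame is shifted back to compensate for that motion and the stack is summed; features moving at exactly the candidate velocity integrate coherently. A minimal sketch for horizontal motion with integer pixel shifts (names and the toy data are illustrative; in the ranging application the winning velocity maps to range given the known sensor motion):

```python
import numpy as np

def velocity_filter(frames, candidate_vels):
    """Shift-and-sum velocity filter over a stack of frames (a sketch).

    For each candidate velocity v, frame k is shifted back by v*k pixels
    and the stack is summed; a feature moving at exactly v adds up
    coherently, so the candidate producing the strongest peak wins.
    """
    best_vel, best_peak = None, -np.inf
    n = len(frames)
    for v in candidate_vels:
        acc = np.zeros_like(frames[0], dtype=float)
        for k, f in enumerate(frames):
            acc += np.roll(f, -v * k, axis=1)   # undo horizontal motion v
        peak = acc.max() / n                    # 1.0 = perfect coherence
        if peak > best_peak:
            best_vel, best_peak = v, peak
    return best_vel

# A bright dot drifting 2 pixels/frame to the right.
frames = []
for k in range(8):
    f = np.zeros((16, 64))
    f[8, 4 + 2 * k] = 1.0
    frames.append(f)
print(velocity_filter(frames, candidate_vels=[0, 1, 2, 3]))   # -> 2
```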

    Expansion-based passive ranging

    This paper describes a new technique of passive ranging based on the image-plane expansion experienced by every object as its distance from the sensor decreases. The technique belongs to the feature/object-based family. The motion and shape of a small window, assumed to be fully contained inside the boundaries of some object, are approximated by an affine transformation. The parameters of the transformation matrix are derived by initially comparing successive images and progressively increasing the image time separation, so as to achieve a much larger triangulation baseline than is currently possible. Depth is derived directly from the expansion part of the transformation. To a first approximation, image-plane expansion is independent of image-plane location with respect to the focus of expansion (FOE) and of platform maneuvers. Thus, an expansion-based method has the potential of providing reliable range in the difficult image area around the FOE. In areas far from the FOE, the shift parameters of the affine transformation can provide more accurate depth information than the expansion alone, and can thus be used similarly to the way they have been used in conjunction with the Inertial Navigation Unit (INU) and Kalman filtering. However, a shift-based algorithm whose shifts are derived from the affine transformation would perform much better than current algorithms, because the shifts, as well as the other parameters, can be obtained between widely separated images. The main advantage of this new approach is therefore that allowing the tracked window to expand and rotate, in addition to moving laterally, enables one to correlate images over a very long time span, which in turn translates into a large spatial baseline and a proportionately higher depth accuracy.
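    The two steps the abstract describes, fitting an affine transformation to a tracked window and reading depth off its expansion part, can be sketched as follows. The least-squares fit and the scale-to-TTC relation TTC = dt / (s - 1) are standard; the function names and synthetic data are illustrative, not from the paper:

```python
import numpy as np

def affine_from_matches(p0, p1):
    """Least-squares affine fit p1 ~ A @ p0 + t from point matches."""
    n = p0.shape[0]
    X = np.hstack([p0, np.ones((n, 1))])        # rows [x, y, 1]
    sol, *_ = np.linalg.lstsq(X, p1, rcond=None)
    A = sol[:2].T                               # 2x2 linear part
    t = sol[2]                                  # translation (the "shifts")
    return A, t

def ttc_from_expansion(A, dt):
    """Isotropic expansion s from the affine part; TTC = dt / (s - 1)."""
    s = np.sqrt(np.linalg.det(A))               # scale factor between frames
    return dt / (s - 1.0)

# Window points expanding 5% per frame plus a lateral shift.
rng = np.random.default_rng(0)
p0 = rng.uniform(-10, 10, size=(20, 2))
p1 = 1.05 * p0 + np.array([3.0, -1.0])
A, t = affine_from_matches(p0, p1)
print(round(ttc_from_expansion(A, dt=1.0), 1))  # -> 20.0 frames to collision
```

Depth then follows from the TTC and the known closing speed; increasing the time separation between the compared images increases s - 1 and with it the effective triangulation baseline, which is the paper's central point.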

    Obstacle detection by recognizing binary expansion patterns

    This paper describes a technique for obstacle detection based on the expansion of the image-plane projection of a textured object as its distance from the sensor decreases. Information is conveyed by vectors whose components represent first-order temporal and spatial derivatives of the image intensity, which are related to the time to collision through the local divergence. Such vectors may be characterized as patterns corresponding to 'safe' or 'dangerous' situations. We show that the essential information is conveyed by single-bit vector components representing the signs of the relevant derivatives. We use two recently developed, high-capacity classifiers, employing neural learning techniques, to recognize the imminence of collision from such patterns.
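    The single-bit idea can be illustrated without any classifier: for an expanding (approaching) pattern, the radial flow u = x/tau, v = y/tau combined with the brightness-constancy equation Et = -(u*Ex + v*Ey) forces sign(Et) = -sign(x*Ex + y*Ey). A sketch that scores how often the derivative signs satisfy this constraint (the score function and synthetic texture are illustrative, not the paper's classifiers):

```python
import numpy as np

def danger_score(prev, curr):
    """Agreement of single-bit derivative signs with the approaching-object
    constraint sign(Et) == -sign(x*Ex + y*Ey), a minimal sketch.

    Only the SIGNS of the first-order derivatives enter, mirroring the
    binary expansion patterns; a score near 1 flags a closing
    ('dangerous') situation, a score near 0 a receding one.
    """
    h, w = curr.shape
    ys, xs = np.mgrid[:h, :w].astype(float)
    xs -= w / 2.0                       # coordinates relative to the FOE,
    ys -= h / 2.0                       # assumed here at the image centre
    ex = np.gradient(curr, axis=1)
    ey = np.gradient(curr, axis=0)
    et = curr - prev
    lhs = np.sign(et)
    rhs = -np.sign(xs * ex + ys * ey)
    valid = (lhs != 0) & (rhs != 0)
    return float((lhs[valid] == rhs[valid]).mean())

# Smooth texture expanding 5% per frame about the centre (approach).
ys, xs = np.mgrid[:64, :64].astype(float)
xs -= 32.0
ys -= 32.0
texture = lambda x, y: np.cos(x / 4.0) * np.cos(y / 4.0)
prev = texture(xs, ys)
curr = texture(xs / 1.05, ys / 1.05)    # same texture, larger on the plane
print(round(danger_score(prev, curr), 2))
```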

    Error Analysis Of Combined Optical-Flow And Stereo Passive Ranging

    The motion of an imaging sensor causes each imaged point of the scene to describe a time trajectory on the image plane. The trajectories of all imaged points are reminiscent of a flow (e.g., of liquid), which is the source of the term "optical flow". Optical-flow ranging is a method by which the stream of two-dimensional images obtained from a forward-looking, forward-moving passive sensor is used to compute range to points in the field of view. Another well-known ranging method is triangulation based on stereo images obtained from at least two stationary sensors. In this paper we analyze the potential accuracies of a combined optical-flow and stereo passive-ranging system in the context of helicopter nap-of-the-earth obstacle avoidance. The Cramér-Rao lower bound is developed for the combined system under the assumption of a random angular misalignment common to both cameras of a stereo pair. It is shown that the range accuracy degradation caused by misalignment is negligible.
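    The flavour of such a bound is easy to see for the stereo part alone: disparity d = f*B/Z implies that any unbiased range estimator obeys sigma_Z >= Z^2 * sigma_d / (f * B), so range error grows quadratically with range and shrinks with baseline. A numeric sketch (the focal length, baseline, and disparity noise below are assumed values, not the paper's; the paper's combined bound is more elaborate):

```python
# Stereo triangulation: d = f*B/Z  =>  |dd/dZ| = f*B/Z^2, so the
# Cramer-Rao-style bound on range error is sigma_Z >= Z^2 * sigma_d / (f*B).
f = 1000.0      # focal length, pixels (assumed)
B = 0.5         # stereo baseline, metres (assumed)
sigma_d = 0.2   # disparity noise std, pixels (assumed)
for Z in (50.0, 100.0, 200.0):
    sigma_Z = Z**2 * sigma_d / (f * B)
    print(f"Z = {Z:5.0f} m  ->  sigma_Z >= {sigma_Z:5.1f} m")
```

Doubling the baseline halves every bound in the table, which is why the expansion-based papers above work so hard to enlarge the effective baseline.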

    MTI Radar For Airline Operations

    This report presents work performed under the general topic of High-Speed Research, Flight-Deck Systems. The sensor of choice is an Airborne Moving-Target-Indicator (AMTI) radar, which is a pulse-Doppler radar equipped with an AMTI signal processor. Our specific area of interest is detecting airborne obstacles while the aircraft is in the final approach for landing (or during takeoff). An adjunct area, due to be addressed under a different cover, is the detection of obstacles on the runway during the same phases of flight. The main impediment to successful obstacle detection is the interference of terrain clutter. This problem aside, even a non-AMTI radar is capable of detecting typical airborne obstacles, moving or stationary (floating), at ranges on the order of 20 mi. At X-band frequencies, it is the ground clutter, not receiver noise or weather attenuation, against which the target signal has to compete. Depending on the geometry, this clutter is received through either the main antenna lobe or its sidelobes. To determine which antenna lobes contribute clutter, one has to find the intersection of the ground (for flat terrain, it is a circle) with the sphere whose radius equals the target range. The angles from the antenna boresight to points on this line of intersection, together with the antenna beam pattern, determine which lobes receive clutter. Therefore, assuming horizontal antenna pointing, the higher the flight altitude and the shorter the range, the farther (angularly) the deleterious sidelobes are from boresight, meaning less clutter. This is why there is little concern about obstacle detection during the cruise phase of the flight. In the approach-for-landing phase, the geometry becomes of concern because of the low altitudes involved. It...
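    The flat-terrain geometry described above reduces to one line of trigonometry: the sphere of radius R around the radar meets flat ground in a circle whose points all sit at depression angle asin(h/R) below a horizontal boresight. A sketch (the function name and the altitudes are illustrative):

```python
import math

def clutter_depression_deg(altitude_m, target_range_m):
    """Depression angle from a horizontal boresight to the ground clutter
    ring at the target's range, assuming flat terrain (a minimal sketch).

    The range sphere of radius R intersects flat ground in a circle; every
    point on it lies asin(h / R) below horizontal.  Larger altitude h or
    shorter range R pushes the clutter ring farther off boresight.
    """
    if target_range_m < altitude_m:
        return None                       # sphere never reaches the ground
    return math.degrees(math.asin(altitude_m / target_range_m))

print(round(clutter_depression_deg(1000.0, 2000.0), 1))  # cruise-like: -> 30.0
print(round(clutter_depression_deg(300.0, 2000.0), 1))   # low approach: ring near boresight
```

The second case shows the approach-phase problem: at low altitude the clutter ring sits only a few degrees off boresight, inside or near the main lobe.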

    Passive Ranging Using Image Expansion

    This paper describes a new technique for passive ranging which is of special interest in areas such as covert nap-of-the-earth helicopter flight and spacecraft landing. The technique is based on the expansion experienced by the image-plane projection of an object as its distance from the sensor decreases. The motion and shape of a small window, assumed to fall inside the boundaries of some object, are approximated by an affine transformation. The parameters of the transformation matrix (expansion, rotation, and translation) are derived by initially comparing successive images and progressively increasing the image time separation. This yields a more favorable geometry for triangulation (a larger baseline) than is currently possible. Depth is derived directly from the expansion part of the transformation, and its accuracy is proportional to the baseline length.